SupRB: A Supervised Rule-based Learning System for Continuous Problems
We propose the SupRB learning system, a new Pittsburgh-style learning
classifier system (LCS) for supervised learning on multi-dimensional continuous
decision problems. SupRB learns an approximation of a quality function from
examples (consisting of situations, choices and associated qualities) and is
then able to make an optimal choice as well as predict the quality of a choice
in a given situation. One area of application for SupRB is parametrization of
industrial machinery. In this field, acceptance of the recommendations of
machine learning systems is highly reliant on operators' trust. While an
essential and much-researched ingredient for that trust is prediction quality,
it seems that this alone is not enough. At least as important is a
human-understandable explanation of the reasoning behind a recommendation.
While many state-of-the-art methods such as artificial neural networks fall
short of this, LCSs such as SupRB provide human-readable rules that can be
understood very easily. The prevalent LCSs are not directly applicable to this
problem as they lack support for continuous choices. This paper lays the
foundations for SupRB and shows its general applicability on a simplified model
of an additive manufacturing problem.
Comment: Submitted to the Genetic and Evolutionary Computation Conference 2020 (GECCO 2020).
Weighted mutation of connections to mitigate search space limitations in Cartesian Genetic Programming
This work presents and evaluates a novel modification to existing mutation operators for Cartesian Genetic Programming (CGP).
We discuss and highlight a previously unresearched limitation of how CGP explores its search space, which is caused by certain nodes being inactive for long periods of time.
Our new mutation operator is intended to avoid this by associating each node with a dynamically changing weight. When mutating a connection between nodes, those weights are then used to bias the probability distribution in favour of inactive nodes.
This way, inactive nodes have a higher probability of becoming active again.
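The weighting scheme described above can be sketched as follows; the function names, the baseline weight of 1, and the linear increment are illustrative assumptions, not the operator's actual definition.

```python
import random


def update_weights(weights, active_nodes, increment=1.0):
    """Raise the weight of nodes that stayed inactive this iteration and
    reset active nodes to a baseline weight of 1 (scheme is illustrative)."""
    for node in weights:
        weights[node] = 1.0 if node in active_nodes else weights[node] + increment
    return weights


def sample_connection_target(weights, rng=random):
    """When mutating a connection, sample its target node with probability
    proportional to the node's weight, so long-inactive nodes are favoured
    and have a higher chance of becoming active again."""
    nodes = list(weights)
    return rng.choices(nodes, weights=[weights[n] for n in nodes], k=1)[0]
```

Under this sketch, a node that has been inactive for k iterations is roughly k times as likely to be chosen as a freshly active one, which is the intended bias toward reactivation.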
We include our mutation operator into two variants of CGP and benchmark both versions on four Boolean learning tasks.
We analyse the average numbers of iterations a node is inactive and show that our modification has the intended effect on node activity.
The influence of our modification on the number of iterations until a solution is reached is ambiguous if the same number of nodes is used as in the baseline without our modification.
However, our results show that our new mutation operator allows the same performance to be reached with fewer nodes, which saves CPU time in each iteration.
Approaches for rule discovery in a learning classifier system
To fill the increasing demand for explanations of the decisions made by automated prediction systems, machine learning (ML) techniques that produce inherently transparent models are particularly well suited. Learning Classifier Systems (LCSs), a family of rule-based learners, produce transparent models by design. However, the usefulness of such models, both for predictions and analyses, heavily depends on the placement and selection of rules (which together constitute the ML task of model selection). In this paper, we investigate a variety of techniques to efficiently place good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare a Random Search, a (1,λ)-ES and three Novelty Search variants. We find a definitive need to guide the search by sensible criteria, i.e. error and generality, rather than placing rules randomly and selecting the better-performing ones; however, we also find that the Novelty Search variants do not beat the easier-to-understand (1,λ)-ES.
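For readers unfamiliar with the (1,λ)-ES compared above, the following minimal sketch shows its comma-selection loop; the rule representation, mutation operator, and fitness function are placeholders supplied by the caller, and nothing here reflects SupRB's actual internals.

```python
import random


def one_comma_lambda_es(init_rule, mutate, fitness, lam=8, generations=20,
                        rng=random):
    """Minimal (1,λ)-ES sketch: each generation, the single parent is
    replaced by the best of its λ mutated offspring. Comma selection
    means the parent itself never survives into the next generation.
    In the rule-discovery setting above, fitness would combine a rule's
    local prediction error and its generality."""
    parent = init_rule
    for _ in range(generations):
        offspring = [mutate(parent, rng) for _ in range(lam)]
        parent = max(offspring, key=fitness)
    return parent
```

As a toy usage, letting a rule be a single real number and maximising fitness(x) = -|x - 3| drives the parent toward 3 over a few dozen generations; in the paper's setting the search space is instead the space of candidate rules.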